    Usable Security: Why Do We Need It? How Do We Get It?

    Security experts frequently refer to people as “the weakest link in the chain” of system security. Famed hacker Kevin Mitnick revealed that he hardly ever cracked a password, because it “was easier to dupe people into revealing it” by employing a range of social engineering techniques. Often, such failures are attributed to users’ carelessness and ignorance. However, more enlightened researchers have pointed out that current security tools are simply too complex for many users, and they have made efforts to improve user interfaces to security tools. In this chapter, we aim to broaden the current perspective, focusing on the usability of security tools (or products) and the process of designing secure systems for the real-world context (the panorama) in which they have to operate. Here we demonstrate how current human factors knowledge and user-centered design principles can help security designers produce security solutions that are effective in practice.

    Stakeholder involvement, motivation, responsibility, communication: How to design usable security in e-Science

    e-Science projects face a difficult challenge in providing access to valuable computational resources, data and software to large communities of distributed users. On the one hand, the raison d'être of the projects is to encourage members of their research communities to use the resources provided. On the other hand, the threats to these resources from online attacks require robust and effective security to mitigate the risks faced. This raises two issues: ensuring that (1) the security mechanisms put in place are usable by the different users of the system, and (2) the security of the overall system satisfies the security needs of all its different stakeholders. A failure to address either of these issues can seriously jeopardise the success of e-Science projects. The aim of this paper is firstly to provide a detailed understanding of how these challenges can present themselves in practice in the development of e-Science applications. Secondly, the paper examines the steps that projects can undertake to ensure that security requirements are correctly identified, and that security measures are usable by the intended research community. The research presented in this paper is based on four case studies of e-Science projects. Security design traditionally uses expert analysis of risks to the technology and deploys appropriate countermeasures to deal with them. However, these case studies highlight the importance of involving all stakeholders in the process of identifying security needs and designing secure and usable systems. For each case study, transcripts of the security analysis and design sessions were analysed to gain insight into the issues and factors that surround the design of usable security. The analysis concludes with a model explaining the relationships between the most important factors identified. This includes a detailed examination of the roles of responsibility, motivation and communication of stakeholders in the ongoing process of designing usable, secure socio-technical systems such as e-Science. (C) 2007 Elsevier Ltd. All rights reserved.

    Integrating security and usability into the requirements and design process

    According to Ross Anderson, 'Many systems fail because their designers protect the wrong things or protect the right things in the wrong way'. Surveys also show that security incidents in industry are rising, which highlights the difficulty of designing good security. Some recent approaches have targeted security from the technological perspective, others from the human–computer interaction angle, offering better user interfaces (UIs) for improved usability of security mechanisms. However, usability issues also extend beyond the user interface and should be considered during system requirements and design. In this paper, we describe Appropriate and Effective Guidance for Information Security (AEGIS), a methodology for the development of secure and usable systems. AEGIS defines a development process and a UML meta-model for defining and reasoning about the system's assets. AEGIS has been applied to case studies in the area of Grid computing, and we report on one of these.
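
    The UML meta-model itself is not reproduced in the abstract, but the following minimal sketch (written in Python rather than UML, with class and field names that are illustrative assumptions rather than the published model) conveys the kind of asset-and-threat structure an AEGIS-style analysis reasons over.

```python
# A minimal, hypothetical sketch of an AEGIS-style asset model. Class and
# field names are illustrative assumptions, not the published UML meta-model.
from dataclasses import dataclass, field
from enum import Enum


class Property(Enum):
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"


@dataclass
class Asset:
    """Something of value in the system (data, software, hardware, people)."""
    name: str
    # Stakeholder-assigned importance of each security property (0 = none, 3 = critical).
    valuations: dict[Property, int] = field(default_factory=dict)


@dataclass
class Threat:
    """A potential attack against one or more assets."""
    description: str
    targets: list[Asset]
    compromises: Property


def at_risk(threat: Threat, minimum_value: int = 2) -> list[Asset]:
    """Return the targeted assets whose threatened property stakeholders rate highly."""
    return [a for a in threat.targets
            if a.valuations.get(threat.compromises, 0) >= minimum_value]


# Example: stakeholders in a Grid project rate experimental data as highly
# confidential, so an eavesdropping threat flags it for a countermeasure.
data = Asset("experimental data", {Property.CONFIDENTIALITY: 3, Property.AVAILABILITY: 2})
threat = Threat("eavesdropping on data transfers", [data], Property.CONFIDENTIALITY)
print([a.name for a in at_risk(threat)])
```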

    The development of secure and usable systems.

    "People are the weakest link in the security chain"---Bruce Schneier. The aim of the thesis is to investigate the process of designing secure systems, and how designers can ensure that security mechanisms are usable and effective in practice. The research perspective is one of security as a socio-technical system. A review of the literature of security design and Human Computer Interactions in Security (HCISec) reveals that most security design methods adopt either an organisational approach, or a technical focus. And whilst HCISec has identified the need to improve usability in computer security, most of the current research in this area is addressing the issue by improving user interfaces to security tools. Whilst this should help to reduce users' errors and workload, this approach does not address problems which arise from the difficulty of reconciling technical requirements and human factors. To date, little research has been applied to socio-technical approaches to secure system design methods. Both identifying successful socio-technical design approaches and gaining a better understanding of the issues surrounding their application is required to address this gap. Appropriate and Effective Guidance for Information Security (AEGIS) is a socio-technical secure system development methodology developed for this purpose. It takes a risk-based approach to security design and focuses on recreating the contextual information surrounding the system in order to better inform security decisions, with the aim of making these decisions better suited to users' needs. AEGIS uses a graphical notation defined in the UML Meta-Object Facility to provide designers with a familiar and well- supported means of building models. Grid applications were selected as the area in which to apply and validate AEGIS. Using the research methodology Action Research, AEGIS was applied to a total of four Grid case studies. This allowed in the first instance the evaluation and refinement of AEGIS on real- world systems. Through the use of the qualitative data analysis methodology Grounded Theory, the design session transcripts gathered from the Action Research application of AEGIS were then further analysed. The resulting analysis identified important factors affecting the design process - separated into categories of responsibility, motivation, stakeholders and communication. These categories were then assembled into a model informing the factors and issues that affect socio-technical secure system design. This model therefore provides a key theoretical insight into real-world issues and is a useful foundation for improving current practice and future socio-technical secure system design methodologies

    Divide and conquer: The role of trust and assurance in the design of socio-technical systems

    In order to be effective, secure systems need to be both correct (i.e. effective when used as intended) and dependable (i.e. actually being used as intended). Given that most secure systems involve people, a strategy for achieving dependable security must address both people and technology. Current research in Human-Computer Interactions in Security (HCISec) aims to increase dependability of the human element by reducing mistakes (e.g. through better user interfaces to security tools). We argue that a successful strategy also needs to consider the impact of social interaction on security, and in this respect trust is a central concept. We compare the understanding of trust in secure systems with the more differentiated models of trust in social science research. The security definition of "trust" turns out to map onto strategies that would be correctly described as "assurance" in the more differentiated model. We argue that distinguishing between trust and assurance yields a wider range of strategies for ensuring dependability of the human element in a secure socio-technical system. Furthermore, correctly placed trust can also benefit an organisation's culture and performance. We conclude by presenting design principles to help security designers decide "when to trust" and "when to assure", and give examples of how both strategies would be implemented in practice.

    Bringing security home: A process for developing secure and usable systems

    The aim of this paper is to provide better support for the development of secure systems. We argue that current development practice suffers from two key problems: (1) security requirements tend to be kept separate from other system requirements, and not integrated into any overall strategy; and (2) the impact of security measures on users, and the operational cost of these measures on a day-to-day basis, are usually not considered. Our new paradigm is the full integration of security and usability concerns into the software development process, thus enabling developers to build secure systems that work in the real world. We present AEGIS, a secure software engineering method which integrates asset identification, risk and threat analysis, and context of use, bound together through the use of UML, and report its application to case studies on Grid projects. An additional benefit of the method is that the involvement of stakeholders in the high-level security analysis improves their understanding of security, and increases their motivation to comply with policies.
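
    As a rough illustration of the risk and threat analysis step mentioned above, the sketch below scores hypothetical threats by likelihood and impact and weighs the day-to-day usability cost of mitigations. The scales, field names and scoring rule are illustrative assumptions, not the scoring AEGIS itself prescribes.

```python
# A minimal, hypothetical sketch of a likelihood-impact risk scoring step,
# used to decide where countermeasures (and the usability cost they impose
# on users) are justified. Scales and the scoring rule are assumptions.
from dataclasses import dataclass


@dataclass
class RiskItem:
    threat: str          # e.g. "stolen credentials"
    asset: str           # e.g. "compute cluster access"
    likelihood: int      # 1 (rare) .. 5 (frequent), judged with stakeholders
    impact: int          # 1 (minor) .. 5 (severe), judged with stakeholders
    usability_cost: int  # 1 (invisible to users) .. 5 (major daily burden)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact


def prioritise(items: list[RiskItem]) -> list[RiskItem]:
    """Highest risk first; among equal risks, prefer the cheaper-to-use mitigation."""
    return sorted(items, key=lambda i: (-i.risk, i.usability_cost))


items = [
    RiskItem("stolen credentials", "compute cluster access", 4, 5, 3),
    RiskItem("accidental data deletion", "experimental results", 3, 4, 1),
]
for item in prioritise(items):
    print(f"{item.threat}: risk={item.risk}, usability cost={item.usability_cost}")
```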

    The different roles of ‘design process champions’ for digital libraries in African higher education

    The concept of design stakeholders is central to effective design of digital libraries. We report on research findings that identified the presence of a key subset of stakeholders which we term ‘design process champions’. Our findings show that these champions can change the interaction patterns and the eventual output of the other stakeholders (project participants) in the design process of digital library projects. This empirical research is based upon 38 interviews with key stakeholders and a review of documentary evidence in ten innovative digital library design projects (e.g. mobile clinical libraries) located in three African universities in Kenya, Uganda and South Africa. Through a grounded theory approach, two different types of ‘design process champion’ emerged from the data, with varying levels of effectiveness in the design process: (i) domain champions and (ii) multidisciplinary champions. The domain champions assume a ‘siloed’ approach to engagement, while the multidisciplinary champions engage participatively throughout the design process. The implications of information specialists functioning as domain champions are discussed. We conclude by suggesting that the multidisciplinary champions’ approach is particularly useful in supporting the sustainability of digital library design projects.

    Eliciting Persona Characteristics for Risk Based Decision Making

    Personas are behavioural specifications of archetypical users, used in Human Factors Engineering and User Interaction research to prevent the biased views system designers may have of users. Personas are therefore nuanced representations of the goals and expectations that should be addressed when designing systems. Previous work has shown how personas may be validated by grounding them in qualitative models; however, more evidence is needed on the applicability of grounding models in risk decision-making research. We present an approach for eliciting persona characteristics for risk-based decision making, using Observe-Orient-Decide-Act (OODA) as a modelling baseline. The approach illustrates how modelling personas based on decision makers’ understanding of risk facilitates designing for risk and uncertainty.
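
    As a rough illustration of grounding persona characteristics in the OODA loop, the sketch below ties each characteristic to the decision-making phase it informs. The phase names follow OODA; the persona, fields and example data are illustrative assumptions, not the paper's elicited personas.

```python
# A minimal, hypothetical sketch of persona characteristics grounded in the
# Observe-Orient-Decide-Act (OODA) loop. Example persona and data are
# illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class OODAPhase(Enum):
    OBSERVE = "observe"
    ORIENT = "orient"
    DECIDE = "decide"
    ACT = "act"


@dataclass
class Characteristic:
    """A single behavioural trait, tied to the decision-making phase it informs."""
    phase: OODAPhase
    statement: str


@dataclass
class Persona:
    name: str
    role: str
    characteristics: list[Characteristic] = field(default_factory=list)

    def by_phase(self, phase: OODAPhase) -> list[str]:
        return [c.statement for c in self.characteristics if c.phase is phase]


analyst = Persona("Dana", "security analyst", [
    Characteristic(OODAPhase.OBSERVE, "Monitors alerts but distrusts noisy sources"),
    Characteristic(OODAPhase.ORIENT, "Interprets risk through recent incidents"),
    Characteristic(OODAPhase.DECIDE, "Escalates when uncertainty is high"),
    Characteristic(OODAPhase.ACT, "Documents actions only after the fact"),
])
print(analyst.by_phase(OODAPhase.DECIDE))
```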